Autonomous driving technology is advancing rapidly, particularly vision-based approaches that use cameras, the most humanlike mode of perception, to sense the driving environment. However, a key challenge for smart vehicles is adapting to varying weather conditions, which can significantly degrade visual perception and complicate vehicular control. Ideally, control strategies should adjust dynamically in real time to the prevailing weather to ensure safe and efficient driving. In this study, we propose a lightweight weather perception model that combines multi-scale feature learning, channel attention mechanisms, and a soft voting ensemble strategy, enabling it to capture diverse visual patterns, emphasize critical information, and integrate predictions across multiple modules for improved robustness. We benchmark the model against several well-known deep learning networks, including EfficientNet-B0, ResNet50, SqueezeNet, MobileNetV3-Large, MobileNetV3-Small, and LSKNet. Evaluated on both public datasets and real-world video recordings from roads in Taiwan, our model demonstrates superior computational efficiency while maintaining high predictive accuracy: it achieves 98.07% classification accuracy with only 0.4 million parameters and 0.19 GFLOPs, surpassing the benchmarked CNNs in efficiency. Compared with EfficientNet-B0, which reaches similar accuracy (98.37%) but requires over ten times more parameters and four times more FLOPs, our model offers a much lighter and faster alternative.
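The text above names the three components but does not specify their arrangement. The following minimal PyTorch sketch shows one plausible way they could fit together; it is an illustrative assumption, not the authors' implementation, and the names `ChannelAttention`, `MultiScaleBlock`, and `WeatherNet`, along with all layer sizes, kernel sizes, and the number of voting heads, are hypothetical.

```python
# Illustrative sketch only: the paper does not release its architecture, so
# every dimension and module choice below is an assumption for exposition.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (one common variant)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))      # global average pool -> channel weights
        return x * w[:, :, None, None]        # reweight feature-map channels

class MultiScaleBlock(nn.Module):
    """Parallel convolutions with different kernel sizes capture multi-scale patterns."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)
        )
        self.attn = ChannelAttention(3 * out_ch)

    def forward(self, x):
        x = torch.cat([F.relu(b(x)) for b in self.branches], dim=1)
        return self.attn(x)                   # emphasize informative channels

class WeatherNet(nn.Module):
    """Hypothetical lightweight weather classifier with soft-voting heads."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU())
        self.block = MultiScaleBlock(16, 16)  # yields 48 channels
        self.heads = nn.ModuleList(nn.Linear(48, num_classes) for _ in range(3))

    def forward(self, x):
        feats = self.block(self.stem(x)).mean(dim=(2, 3))       # global pooling
        probs = [F.softmax(h(feats), dim=1) for h in self.heads]
        return torch.stack(probs).mean(dim=0)  # soft voting: average probabilities
```

In this sketch, soft voting averages the per-head softmax probabilities rather than taking a majority over hard labels, so more confident heads contribute proportionally to the final prediction.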